
    Distributed Object Medical Imaging Model

    Abstract: Digital medical informatics and images are commonly used in hospitals today. Because the radiology department is closely interrelated with other departments, especially the intensive care unit and the emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical organizations. The applications support real-time exchange of patient data, image files, and audio and video diagnosis annotations. DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common Object Request Broker Architecture (CORBA), Java Database Connectivity (JDBC), and the Java language provide the capability to combine the DOMIM resources into an integrated, interoperable, and scalable system. The underlying technologies, including IDL, ORB, the Event Service, IIOP, JDBC/ODBC, legacy system wrapping, and the Java implementation, are explored. This paper explores a distributed, collaborative, CORBA/JDBC-based framework that will enhance medical information management requirements and development. It encompasses a new paradigm for the delivery of health services that requires process reengineering, cultural changes, as well as organizational changes.
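
    As a rough, hedged illustration of the JDBC side of such a framework, the sketch below looks up image-study records for one patient; the connection URL, the table and column names (image_study, patient_id, modality, file_path) and the credentials are assumptions for illustration, not part of DOMIM itself.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Minimal JDBC sketch: fetch image-study records for one patient.
        // Table/column names, URL and credentials are illustrative assumptions;
        // the JDBC/ODBC data source and driver are assumed to be configured.
        public class ImageStudyLookup {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:odbc:domimDB"; // hypothetical JDBC/ODBC data source
                try (Connection con = DriverManager.getConnection(url, "user", "password");
                     PreparedStatement ps = con.prepareStatement(
                         "SELECT modality, file_path FROM image_study WHERE patient_id = ?")) {
                    ps.setString(1, "P-0001");
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("modality") + " -> "
                                               + rs.getString("file_path"));
                        }
                    }
                }
            }
        }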

    Transformation of Sequential Programs into Parallel Forms

    One of the main tasks of a programmer when writing parallel programs is to identify the parts that are to be executed in parallel. This process is very time consuming and error prone. As an alternative, one can write the sequential version and then transform it into a parallel form with a parallelizing compiler. The loops in sequential programs offer the best opportunities for parallelism. This paper presents the transformation techniques that can be applied to sequential programs, especially the loops, in order to parallelize them. These techniques are based on Bernstein's sets.
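
    A minimal sketch of the dependence test behind such transformations is given below, assuming the read and write variable sets of each statement are already known; it applies Bernstein's conditions, under which two statements may execute in parallel only if neither writes a variable that the other reads or writes.

        import java.util.HashSet;
        import java.util.Set;

        // Sketch of Bernstein's conditions: statements S1 and S2 can run in parallel
        // iff (In1 ∩ Out2), (In2 ∩ Out1) and (Out1 ∩ Out2) are all empty.
        public class BernsteinCheck {
            static boolean disjoint(Set<String> a, Set<String> b) {
                Set<String> tmp = new HashSet<>(a);
                tmp.retainAll(b);
                return tmp.isEmpty();
            }

            static boolean canParallelize(Set<String> in1, Set<String> out1,
                                          Set<String> in2, Set<String> out2) {
                return disjoint(in1, out2) && disjoint(in2, out1) && disjoint(out1, out2);
            }

            public static void main(String[] args) {
                // S1: a = b + c   S2: d = e * f   -> independent, safe to parallelize
                Set<String> in1 = Set.of("b", "c"), out1 = Set.of("a");
                Set<String> in2 = Set.of("e", "f"), out2 = Set.of("d");
                System.out.println(canParallelize(in1, out1, in2, out2)); // true
            }
        }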

    Top-down Heuristic for Finding Optimal Grain Size of Parallel Tasks

    In order to achieve an optimal execution time for a program running on a multiprocessor system, the program has to be partitioned into concurrent tasks. Partitioning programs into grain sizes suitable for parallel execution is an NP-complete problem, but a near-optimal solution can be derived. This paper discusses a heuristic to determine the near-optimal grain size of parallel tasks that gives the best execution time. The effects of communication overheads between the different processors are examined. The heuristic developed is capable of balancing the maximization of parallelism against the minimization of overheads.
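
    The kind of cost trade-off such a heuristic evaluates can be sketched roughly as follows; the linear cost model (per-task computation time plus a fixed communication overhead per task), the constants and the exhaustive search are illustrative assumptions, not the heuristic developed in the paper.

        // Illustrative grain-size search: total work W split into g tasks on p processors,
        // with a fixed communication overhead c per task. Estimated time is
        // T(g) = ceil(g/p) * (W/g) + c * g; pick the g that minimizes it.
        public class GrainSizeSearch {
            public static void main(String[] args) {
                double work = 10_000.0;      // total computation units (assumed)
                double overhead = 5.0;       // communication cost per task (assumed)
                int processors = 8;

                int bestGrains = 1;
                double bestTime = Double.MAX_VALUE;
                for (int g = 1; g <= 1024; g++) {
                    int rounds = (g + processors - 1) / processors; // ceil(g/p)
                    double time = rounds * (work / g) + overhead * g;
                    if (time < bestTime) {
                        bestTime = time;
                        bestGrains = g;
                    }
                }
                System.out.printf("best number of tasks = %d, estimated time = %.1f%n",
                                  bestGrains, bestTime);
            }
        }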

    Using metadata analysis and base analysis techniques in DQ framework for DW

    Data quality (DQ) issues have become a major concern over the past decade. Even with enhancements and the introduction of new technologies, systems are still of limited value if the quality of their data is poor. Data warehouses (DW) are complex systems that have to deliver highly aggregated, high-quality data from heterogeneous sources to decision makers, and they involve a great deal of integration of source systems to support business operations. This paper proposes a framework for implementing DQ in DW system architectures using the Metadata Analysis Technique and the Base Analysis Technique to compare target values with the current values obtained from the systems. A prototype using PHP has been developed to support the Base Analysis Technique. The paper also emphasizes the dimensions that need to be considered when performing DQ processes.
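
    The base analysis idea can be illustrated loosely as below; the prototype in the paper is written in PHP, but this sketch uses Java for consistency with the other examples here, and the field names, values and the simple match ratio are assumptions.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Base-analysis sketch: compare current values drawn from a source system
        // against expected target values and report the fraction that match.
        public class BaseAnalysisCheck {
            public static void main(String[] args) {
                Map<String, String> target = new LinkedHashMap<>();
                target.put("country_code", "MY");
                target.put("currency", "MYR");
                target.put("status", "ACTIVE");

                Map<String, String> current = new LinkedHashMap<>();
                current.put("country_code", "MY");
                current.put("currency", "RM");      // deviates from target
                current.put("status", "ACTIVE");

                int matches = 0;
                for (Map.Entry<String, String> e : target.entrySet()) {
                    String actual = current.get(e.getKey());
                    boolean ok = e.getValue().equals(actual);
                    if (ok) matches++;
                    System.out.println(e.getKey() + ": target=" + e.getValue()
                                       + ", current=" + actual + (ok ? "" : "  <-- mismatch"));
                }
                System.out.printf("DQ score = %.2f%n", (double) matches / target.size());
            }
        }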

    Pro-active QoS resource management schemes for future integrated packet-switched networks

    In this research, two pro-active dynamic QoS resource management schemes are designed: the dynamic QoS control scheme with delay estimation and the hybrid dynamic QoS control scheme. In both schemes, every new packet arrival is compared against the computed estimate of the delay it will experience before being admitted into the buffer. If the estimated delay exceeds the requested delay bound, the packet is dropped. In the hybrid scheme, every packet is first assessed for the estimated delay prior to being admitted into the buffer; subsequently, the packets that have been successfully admitted into the buffer are evaluated on the actual delay experienced before being transmitted to the receiver. The paper compares the performance of the two proposed schemes with a dynamic resource management scheme known as OCcuPancy_Adjusting (OCP_A). The results obtained through the simulation models show that the proposed schemes significantly improve the average delay for different traffic patterns. In addition to improving the average delay of delay-sensitive traffic, improvement is seen in the average packet loss ratio, which in turn increases the throughput of delay-sensitive traffic.
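
    The admission decision can be sketched roughly as follows, assuming a single FIFO buffer served at a known rate: the estimated delay of an arriving packet is taken as the backlog (including the packet itself) divided by the service rate, and the packet is dropped when that estimate exceeds its delay bound. The queue model and the parameters are illustrative assumptions, not the schemes themselves.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Sketch of pro-active admission control: estimate the queueing delay an
        // arriving packet would see and drop it if the estimate exceeds its bound.
        public class DelayEstimationQueue {
            private final Deque<Integer> buffer = new ArrayDeque<>(); // packet sizes in bits
            private long backlogBits = 0;
            private final double serviceRateBps;

            DelayEstimationQueue(double serviceRateBps) {
                this.serviceRateBps = serviceRateBps;
            }

            /** Returns true if the packet is admitted, false if it is dropped. */
            boolean offer(int sizeBits, double delayBoundSeconds) {
                double estimatedDelay = (backlogBits + sizeBits) / serviceRateBps;
                if (estimatedDelay > delayBoundSeconds) {
                    return false; // estimated delay exceeds the requested bound: drop
                }
                buffer.addLast(sizeBits);
                backlogBits += sizeBits;
                return true;
            }

            /** Serve one packet (head of line). */
            void transmitOne() {
                Integer p = buffer.pollFirst();
                if (p != null) backlogBits -= p;
            }

            public static void main(String[] args) {
                DelayEstimationQueue q = new DelayEstimationQueue(1_000_000); // 1 Mb/s (assumed)
                System.out.println(q.offer(500_000, 0.6));  // admitted: ~0.5 s estimated delay
                System.out.println(q.offer(500_000, 0.6));  // dropped: ~1.0 s estimated delay
            }
        }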

    Implementation of parallel boundary integral method on spherical bubble dynamics using shared memory computer

    The boundary integral method is employed to model the dynamic behavior of a 3D spherical bubble. The problem has been solved on the Sequent Symmetry S5000 SE30 computer to better understand the opportunities and challenges that parallel processing presents. Analyses of the parallel performance of the approximation to the potential at certain external points, as well as of the normal derivatives of the potential on the surface of the bubble, were generated using linear representations of the surface and the functions. In these calculations, 4, 6 and 8 Gauss points were used in the integration on 4, 8, 16, 32 and 64 segments. Results from this study demonstrate that parallel computing greatly reduces the computational effort and is shown to be an effective tool for several problems related to bubble dynamics.
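
    A much reduced sketch of the parallel pattern (not of the boundary integral formulation itself) is shown below: the integral is split into segments, each segment is evaluated with a fixed set of Gauss points, and the per-segment contributions are summed in parallel. The 4-point Gauss-Legendre rule is standard, but the integrand and the use of Java parallel streams are assumptions for illustration.

        import java.util.stream.IntStream;

        // Parallel quadrature sketch: split an integral into segments and evaluate
        // each segment with a 4-point Gauss-Legendre rule, summing in parallel.
        public class ParallelGaussQuadrature {
            // 4-point Gauss-Legendre nodes and weights on [-1, 1]
            static final double[] X = { -0.8611363115940526, -0.3399810435848563,
                                         0.3399810435848563,  0.8611363115940526 };
            static final double[] W = {  0.3478548451374538,  0.6521451548625461,
                                         0.6521451548625461,  0.3478548451374538 };

            static double f(double x) { return Math.sin(x); } // illustrative integrand

            static double segment(double a, double b) {
                double half = 0.5 * (b - a), mid = 0.5 * (a + b), sum = 0.0;
                for (int i = 0; i < X.length; i++) sum += W[i] * f(mid + half * X[i]);
                return half * sum;
            }

            public static void main(String[] args) {
                int segments = 64;                      // 4, 8, 16, 32 or 64 as in the study
                double a = 0.0, b = Math.PI, h = (b - a) / segments;
                double integral = IntStream.range(0, segments).parallel()
                        .mapToDouble(i -> segment(a + i * h, a + (i + 1) * h))
                        .sum();
                System.out.println(integral);           // ~2.0 for sin(x) on [0, pi]
            }
        }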

    The success factors for government information sharing (GIS) in natural disaster management and risk reduction

    The frequency of natural disasters worldwide has been increasing for the last 30 years, causing great damage and losses. About 90% of these damages and losses are concentrated in the Asian region, where natural disasters are a serious issue not only from a humanitarian but also from an economic and industrial point of view. They bring about the loss of lives, property and employment, and damage to the physical infrastructure and the environment. Disaster management (DM), including risk reduction efforts, aims to minimize or avoid the potential losses from hazards, assure prompt and appropriate assistance to victims of disaster, and achieve rapid and effective recovery. While information, knowledge and resource sharing can enhance the process of DM, there is a perceived gap in government collaboration and coordination within the context of natural DM. Identifying potential success factors will be an enabler in managing disasters. The objective of this paper is to present the literature findings on success factors that ensure the quality of government information sharing in effectively supporting DM. Accordingly, the identified factors were classified into major categories, namely political leadership support, inter-agency collaboration, individual agency capacity including ICT, and agency benefits.

    Design of simulation system for performance predictions of WDM single-hop networks

    This paper describes the design, development and use of a software architecture for a simulation environment to examine, validate and predict the performance of a piggybacked token passing protocol for a wavelength division multiplexed (WDM) optical network. This simulation environment overcomes many of the limitations found with analytical models. A set of the principal components, and their dynamics, which make up the simulation design has been identified. It is shown that this protocol optimises the usage of the bandwidth available in the optical fibre, with more than 70% used for data transmission. It is also suggested that the number of channels required to accomplish a single-hop connection within a local environment is small, with a channel-to-node ratio of 1:4. This is comparatively small and requires only limited-tuneable transceivers.
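
    How such a utilization figure can be read off a simulation may be sketched, very loosely, as below; this toy model is not the piggybacked protocol itself, and the round-robin token passing, the one-slot token overhead, the burst limit and the traffic load are all assumptions for illustration.

        import java.util.Random;

        // Toy slotted token-passing simulation on a single shared channel: the node
        // holding the token sends up to a small burst of queued packets (one data
        // slot each), then passes the token at a cost of one control slot.
        public class TokenRingUtilization {
            public static void main(String[] args) {
                int nodes = 16, burstLimit = 4;
                long totalSlots = 200_000;
                double arrivalProbPerSlot = 0.04;   // assumed per-node load
                int[] queue = new int[nodes];
                Random rnd = new Random(42);

                long dataSlots = 0, controlSlots = 0, elapsed = 0;
                int holder = 0;
                while (elapsed < totalSlots) {
                    int burst = Math.min(queue[holder], burstLimit);
                    long turnSlots = burst + 1;     // data slots plus one token-passing slot
                    queue[holder] -= burst;
                    dataSlots += burst;
                    controlSlots++;
                    elapsed += turnSlots;

                    // generate arrivals at every node for each slot of this turn
                    for (long s = 0; s < turnSlots; s++)
                        for (int n = 0; n < nodes; n++)
                            if (rnd.nextDouble() < arrivalProbPerSlot) queue[n]++;

                    holder = (holder + 1) % nodes;
                }
                double utilization = (double) dataSlots / (dataSlots + controlSlots);
                System.out.printf("data slots = %d, token slots = %d, utilization = %.2f%n",
                                  dataSlots, controlSlots, utilization);
            }
        }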

    A fuzzy channel allocation scheme for the WDM hierarchical all-optical network

    A fuzzy means of allocating WDM channels in a hierarchical all-optical network (AON) for the modified token medium access protocol is addressed. The goal is to minimise the average delay of local subnet and global bound traffic, and to maximise the number of nodes that can be supported by the network. This is achieved by allotting a minimum number of spatially-reused channels to the subnets, which can accommodate a certain maximum number of nodes; in terms of cost, it is the minimum number of channels for each subnet that is actually sought. By working out the maximum number of nodes for each subnet and the total number of subnets that can be supported, the optimum number of global channels and the overall total number of nodes for the entire network can hence be determined. The packet generation rate and the average delay in slot time are used to gauge the performance of the fuzzy channel allocation model.
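
    A minimal sketch of the kind of fuzzy inference such an allocation scheme might use is given below; the two inputs (offered load and average delay, both normalised to [0, 1]), the triangular membership functions, the rule base and the candidate channel counts are all illustrative assumptions rather than the model from the paper.

        // Zero-order Sugeno-style fuzzy sketch: from a normalised offered load and a
        // normalised average delay, infer how many channels to allot to a subnet.
        public class FuzzyChannelAllocator {
            // triangular membership function with feet at a and c, peak at b
            static double tri(double x, double a, double b, double c) {
                if (x <= a || x >= c) return 0.0;
                return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
            }

            static double low(double x)  { return tri(x, -0.5, 0.0, 0.5); }
            static double med(double x)  { return tri(x,  0.0, 0.5, 1.0); }
            static double high(double x) { return tri(x,  0.5, 1.0, 1.5); }

            /** Recommend a channel count for one subnet (illustrative rule base). */
            static int recommend(double load, double delay) {
                // rule firing strengths (min as AND) and their crisp outputs
                double[] strength = {
                    Math.min(low(load),  low(delay)),   // -> 1 channel
                    Math.min(med(load),  med(delay)),   // -> 2 channels
                    Math.min(high(load), low(delay)),   // -> 3 channels
                    Math.min(high(load), high(delay))   // -> 4 channels
                };
                double[] channels = { 1, 2, 3, 4 };
                double num = 0, den = 0;
                for (int i = 0; i < strength.length; i++) {
                    num += strength[i] * channels[i];
                    den += strength[i];
                }
                return den == 0 ? 1 : (int) Math.round(num / den);
            }

            public static void main(String[] args) {
                System.out.println(recommend(0.2, 0.1)); // lightly loaded subnet
                System.out.println(recommend(0.9, 0.8)); // heavily loaded subnet
            }
        }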